Accurate Parallel Floating-Point Accumulation
Authors
Abstract
Similar Resources
Accurate Floating-Point Summation
Given a vector of floating-point numbers with exact sum s, we present an algorithm for calculating a faithful rounding of s into the set of floating-point numbers, i.e. one of the immediate floating-point neighbors of s. If s is itself a floating-point number, we prove that it is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e. it is very fast for mildly...
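The abstract only states the result; the sketch below is a hedged Python illustration of the error-free transformation (Knuth's TwoSum) that adaptive summation algorithms of this kind are typically built on. The cascading strategy shown is an assumption for illustration, not the paper's algorithm, and it produces a highly accurate sum rather than a guaranteed faithful rounding.

```python
def two_sum(a, b):
    """Error-free transformation: s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    b_virtual = s - a
    e = (a - (s - b_virtual)) + (b - b_virtual)
    return s, e

def compensated_sum(xs):
    """Sum xs while carrying along the rounding error of every addition."""
    s, err = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e            # collect the error made by this addition
    return s + err          # apply the correction once at the end
```

A faithfully rounded result in the paper's sense additionally requires adapting to the condition number of the sum, for example by repeating such transformations until the residual error is small enough.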
Accurate floating point summation
We present and analyze several simple algorithms for accurately summing n floating point numbers S = ∑_{i=1}^{n} s_i, independent of how much cancellation occurs in the sum. Let f be the number of significant bits in the s_i. We assume a register is available with F > f significant bits. Then assuming that (1) n ≤ ⌊2^{F−f}/(1 − 2^{−f})⌋ + 1, (2) rounding is to nearest, (3) no overflow occurs, and (4) all u...
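As a hedged illustration of the wide-register setting described above, the Python sketch below assumes binary32 summands (f = 24) accumulated in a binary64 register (F = 53) via NumPy; the concrete formats are my choice, since the paper is parameterized by f and F.

```python
import numpy as np

f, F = 24, 53                                    # significant bits of binary32 / binary64
n_max = int(2**(F - f) / (1 - 2.0**(-f))) + 1    # bound (1): n <= floor(2^(F-f)/(1 - 2^-f)) + 1

x = np.random.rand(1000).astype(np.float32)      # n = 1000, well below n_max

acc = np.float64(0.0)
for xi in x:
    acc += np.float64(xi)                        # each addend converts exactly; only the adds round

print(acc, float(np.sum(x, dtype=np.float32)))   # wide-register sum vs. naive binary32 sum
```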
Reproducible Parallel Floating-Point Computations
Because of rounding errors, floating-point operations such as addition and multiplication are not associative; the computed result therefore also depends on the order of computation. As a consequence, we may not get the same answer from run to run, even on the same machine, when the number of available processors varies. That makes it harder to understand the reliability of the output, especially with the increasing level of par...
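A short Python example of the non-associativity mentioned above; the three addends are arbitrary and chosen only so that the two groupings give different answers, which is exactly why a parallel reduction whose grouping depends on the number of processors need not be reproducible.

```python
a, b, c = 1e20, -1e20, 1.0

left_to_right = (a + b) + c   # 0.0 + 1.0 -> 1.0
regrouped     = a + (b + c)   # b + c rounds back to -1e20, so the sum is 0.0

print(left_to_right, regrouped)   # 1.0 0.0 -- same inputs, different grouping
```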
Accurate floating-point summation: a new approach
The aim of this paper is to find an accurate and efficient algorithm for evaluating the summation of large sets of floating-point numbers. We present a new representation of the floating-point number system in which a number is represented as a linear combination of integers whose coefficients are powers of the base of the floating-point system. The approach allows one to build up an accurate flo...
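As a hedged sketch of the underlying fact the abstract relies on, namely that every finite floating-point number is an integer scaled by a power of the base, the Python below accumulates a sum exactly as rationals and rounds once at the end. This is not the paper's representation (which keeps a set of integer coefficients indexed by powers of the base); it only illustrates why such exact accumulation is possible.

```python
from fractions import Fraction

def exact_round_sum(xs):
    """Accumulate doubles exactly (each is an integer times a power of two), round once."""
    total = Fraction(0)
    for x in xs:
        total += Fraction(x)   # Fraction(x) is exact for any finite float
    return float(total)        # the only rounding in the whole computation

print(exact_round_sum([1e100, 1.0, -1e100]))   # 1.0, whereas naive left-to-right summation gives 0.0
```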
The Complexity of Accurate Floating Point Computation
Our goal is to find accurate and efficient algorithms, when they exist, for evaluating rational expressions containing floating point numbers, and for computing matrix factorizations (like LU and the SVD) of matrices with rational expressions as entries. More precisely, accuracy means the relative error in the output must be less than one (no matter how tiny the output is), and efficiency means...
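As a hedged illustration of the accuracy requirement stated above (relative error below one, however small the true value), the toy Python expression below loses every correct digit to absorption when evaluated naively in binary64, while exact rational evaluation recovers it; the expression is my own example, not one from the paper.

```python
from fractions import Fraction

a = 1e17
naive = (a + 1.0) - a                            # binary64: the 1.0 is absorbed, result is 0.0
exact = float((Fraction(a) + 1) - Fraction(a))   # exact rational evaluation: 1.0

print(naive, exact)   # relative error of the naive result is |0 - 1| / 1 = 1, not below one
```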
Journal
Journal title: IEEE Transactions on Computers
Year: 2016
ISSN: 0018-9340
DOI: 10.1109/tc.2016.2532874